ABSTRACTS
COLLABORATIVE INTERFACE AGENTS
Accepted to AAAI '94
Y. Lashkari, M. Metral, P. Maes
Postscript Version or HTML Version
Interface agents are semi-intelligent systems which assist users with
daily computer-based tasks. Recently, various researchers have
proposed a learning approach to building such agents, and some
working prototypes have been demonstrated. Such agents learn by
`watching over the shoulder' of the user and detecting patterns and
regularities in the user's behaviour. Despite these successes, the
learning approach has two major problems: the agent has to learn from
scratch and thus takes some time to become useful, and its competence
is necessarily limited to actions it has seen the user perform.
Collaboration between agents assisting different users can alleviate
both of these problems. We present a framework for multi-agent
collaboration and discuss results of a working prototype based on
learning agents for electronic mail.
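The collaboration scheme the abstract describes can be sketched in a few lines. This is a hypothetical toy, not the paper's actual algorithm: each agent learns (situation, action) pairs from its own user, and when its local confidence is low it polls peer agents, weighting their suggestions by an invented per-peer trust score.

```python
# Hypothetical sketch of collaborating interface agents. The class name,
# trust weights, and confidence threshold are illustrative assumptions,
# not taken from the paper.
from collections import Counter

class MailAgent:
    def __init__(self, name, confidence_threshold=0.6):
        self.name = name
        self.memory = {}     # situation -> Counter of observed user actions
        self.peers = []      # other MailAgent instances to consult
        self.trust = {}      # peer name -> trust weight
        self.threshold = confidence_threshold

    def observe(self, situation, action):
        """Learn by 'watching over the shoulder' of the user."""
        self.memory.setdefault(situation, Counter())[action] += 1

    def _local_prediction(self, situation):
        counts = self.memory.get(situation)
        if not counts:
            return None, 0.0
        action, n = counts.most_common(1)[0]
        return action, n / sum(counts.values())

    def suggest(self, situation):
        action, confidence = self._local_prediction(situation)
        if confidence >= self.threshold:
            return action
        # Low confidence: ask peers and weight their votes by trust.
        votes = Counter()
        if action is not None:
            votes[action] += confidence
        for peer in self.peers:
            peer_action, peer_conf = peer._local_prediction(situation)
            if peer_action is not None:
                votes[peer_action] += peer_conf * self.trust.get(peer.name, 0.5)
        return votes.most_common(1)[0][0] if votes else None

veteran, novice = MailAgent("veteran"), MailAgent("novice")
novice.peers, novice.trust = [veteran], {"veteran": 1.0}
for _ in range(5):
    veteran.observe("mail from mailing-list", "file in list folder")
# The novice has never seen this situation, so it falls back on its peer,
# sidestepping the learn-from-scratch problem the abstract identifies.
print(novice.suggest("mail from mailing-list"))
```

The point of the sketch is that a freshly created agent is useful immediately, because peer experience substitutes for its empty memory until its own confidence crosses the threshold.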
NEWT
HTML Version
PAYING ATTENTION TO WHAT'S IMPORTANT
Accepted to SAB '94
L. Foner, P. Maes
Postscript Version
Adaptive autonomous agents have to learn about the effects of their
actions so as to be able to improve their performance and adapt to
long-term changes. The problem of correlating actions with changes in
sensor data is O(n^2) and therefore computationally intractable for any
non-trivial application. We propose to make this problem more
manageable by using focus of attention. In particular, we discuss two
complementary methods for focus of attention: perceptual selectivity
restricts the set of sensor data the agent attends to at a particular
point in time, while cognitive selectivity restricts the set of
internal structures that are updated at a particular point in time. We
present results of an implemented algorithm, a variant of the schema
mechanism [Drescher 91], which employs these two forms of focus of
attention. The results demonstrate that incorporating focus of
attention drastically improves the tractability of learning action
models without affecting the quality of the knowledge learned, at the
relatively small cost of doubling the number of training examples
required to learn the same knowledge.
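The gain from perceptual selectivity can be illustrated with a toy learner (this is an invented sketch, not the schema mechanism): instead of correlating every action with every one of n sensors, statistics are updated only for the small subset of sensors currently in the focus of attention.

```python
# Toy illustration of perceptual selectivity. The sensor names and the
# reliability statistic are illustrative assumptions.
from collections import defaultdict

class FocusedLearner:
    def __init__(self):
        # (action, sensor) -> [times sensor changed, times action taken]
        self.stats = defaultdict(lambda: [0, 0])

    def update(self, action, before, after, focus):
        """Record which *attended* sensors changed after taking `action`."""
        for sensor in focus:                 # perceptual selectivity:
            counts = self.stats[(action, sensor)]   # ignore everything
            counts[1] += 1                          # outside the focus set
            if before[sensor] != after[sensor]:
                counts[0] += 1

    def reliability(self, action, sensor):
        changed, taken = self.stats[(action, sensor)]
        return changed / taken if taken else 0.0

learner = FocusedLearner()
before = {"hand_touching": 0, "light_level": 5, "noise": 1}
after  = {"hand_touching": 1, "light_level": 5, "noise": 0}
# Attend only to sensors plausibly relevant to the action; "noise" is
# never examined, so the cost per step is O(|focus|) rather than O(n).
learner.update("move_hand", before, after, focus={"hand_touching", "light_level"})
print(learner.reliability("move_hand", "hand_touching"))
```

The trade-off the abstract reports shows up here too: an informative sensor left out of the focus set simply accumulates no statistics, so more training examples are needed to recover the same correlations.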
EVOLVING VISUAL ROUTINES
Accepted to Alife-IV.
Michael Patrick Johnson, Prof. Pattie Maes and Trevor Darrell.
Compressed Postscript Version
Traditional machine vision assumes that the vision system recovers a
complete, labeled description of the world [Marr]. Recently, several
researchers have criticized this model and proposed an alternative
model which considers perception as a distributed collection of
task-specific, task-driven visual routines [Aloimonos, Ullman]. Some
of these researchers have argued that in natural living systems these
visual routines are the product of natural selection [Ramachandran].
So far, researchers have hand-coded task-specific visual routines for
actual implementations (e.g. [Chapman]). In this paper we propose an
alternative approach in which visual routines for simple tasks are
evolved rather than hand-coded, using artificial evolution. We present results
from a series of runs on actual camera images, in which simple
routines were evolved using Genetic Programming techniques [Koza]. The
results obtained are promising: the evolved routines are able to
correctly classify up to 93% of the images, which is better than the
best algorithm we were able to write by hand.
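The Genetic Programming approach can be sketched in miniature. The following is an invented toy, not the paper's system: expressions over simple per-image statistics are evolved by selection and mutation, then thresholded to classify tiny synthetic "images"; the primitives, fitness function, and data are all illustrative assumptions.

```python
# Minimal GP sketch in the spirit of [Koza]: evolve an expression tree
# over image statistics and threshold it to classify synthetic images.
import random

random.seed(0)

# Terminals: per-image statistics a visual routine might compute.
TERMINALS = ["mean", "max", "min"]
OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b}

def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, stats):
    if isinstance(tree, str):
        return stats[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, stats), evaluate(right, stats))

def stats_of(image):
    flat = [p for row in image for p in row]
    return {"mean": sum(flat) / len(flat), "max": max(flat), "min": min(flat)}

def fitness(tree, dataset):
    # Classify as positive when the evolved expression exceeds a threshold.
    correct = sum((evaluate(tree, stats_of(img)) > 0.5) == label
                  for img, label in dataset)
    return correct / len(dataset)

def mutate(tree):
    return random_tree() if random.random() < 0.5 else tree

# Synthetic task: "bright" 2x2 images are the positive class.
dataset = [([[0.9, 0.8], [0.7, 0.9]], True),
           ([[0.1, 0.2], [0.0, 0.1]], False),
           ([[0.8, 0.9], [0.9, 0.8]], True),
           ([[0.2, 0.1], [0.2, 0.0]], False)]

# Evolution loop: keep the fittest half, refill with mutated survivors.
population = [random_tree() for _ in range(20)]
for generation in range(10):
    population.sort(key=lambda t: fitness(t, dataset), reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=lambda t: fitness(t, dataset))
print(fitness(best, dataset))
```

A real system would use crossover as well as mutation and far richer visual primitives; the sketch only shows the evolve-evaluate-select loop that replaces hand-coding the routine.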
_________________________________________________________________
The Autonomous Agents Group / MIT Media Lab /
agentmaster@media.mit.edu